To address the problem that traditional Area-of-Interest (AOI) based visualization methods overlook local details when analyzing pilot eye-movement data, a visual analysis method for eye-movement data based on user-defined AOIs was proposed. Firstly, according to the specific analysis task, self-division and self-definition of the task's background image were introduced. Then, multiple auxiliary views and interaction techniques were combined to design and implement an eye-movement data visual analysis system for pilot training, which helps analysts examine differences in eye movements among pilots. Finally, case studies demonstrated the effectiveness of the visual analysis method and the practicality of the analysis system. The experimental results show that, compared with the traditional method, the proposed method increases analysts' initiative in the analysis process: analysts can explore the details of the task background at both global and local levels, analyze the data from multiple perspectives, and identify flight students' cognitive difficulties during training as a whole, so as to develop more targeted and more effective training courses.
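The core operation behind AOI-based analysis is mapping each fixation point onto a user-defined region of the background image. The following is a minimal illustrative sketch of that step, not the paper's implementation: the polygon test, data layout, and AOI names (e.g. "airspeed", "attitude") are all assumptions for demonstration.

```python
# Hypothetical sketch: classify fixation points into user-defined AOIs
# (polygons drawn on the task background image, in pixel coordinates).

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon given as [(px, py), ...]?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray at height y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def assign_fixations(fixations, aois):
    """Count how many fixations (x, y, duration_ms) fall in each named AOI."""
    counts = {name: 0 for name in aois}
    for x, y, _duration in fixations:
        for name, polygon in aois.items():
            if point_in_polygon(x, y, polygon):
                counts[name] += 1
                break
    return counts

# Example: two user-defined AOIs over a cockpit image (names are illustrative).
aois = {
    "airspeed": [(0, 0), (100, 0), (100, 80), (0, 80)],
    "attitude": [(120, 0), (240, 0), (240, 80), (120, 80)],
}
fixations = [(50, 40, 300), (180, 30, 250), (400, 400, 120)]
print(assign_fixations(fixations, aois))  # {'airspeed': 1, 'attitude': 1}
```

Because the AOIs are plain polygons supplied by the analyst rather than a fixed grid, the same fixation stream can be re-aggregated under different region definitions for different analysis tasks.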
Multi-exposure image fusion directly combines a sequence of images of the same scene captured at different exposure levels into a single high-quality image with richer scene detail. To address the poor local contrast and color distortion of existing algorithms, a new multi-exposure image fusion algorithm based on the Retinex theoretical model was proposed. Firstly, based on the Retinex model, the exposure sequence was decomposed into an illumination component sequence and a reflectance component sequence by an illumination estimation algorithm, and the two sequences were then fused by different methods. For the illumination component, the global brightness variation of the scene was preserved and the influence of over-exposed and under-exposed regions was weakened; for the reflectance component, evaluation parameters for moderate exposure were used to better preserve the color and detail information of the scene. The proposed algorithm was analyzed both subjectively and objectively. The experimental results show that, compared with the traditional algorithm based on image-domain synthesis, the proposed algorithm improves Structural SIMilarity (SSIM) by 1.7% on average and performs better in color rendition and local detail.
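The decompose-fuse-recombine pipeline above can be sketched as follows. This is an illustrative toy version under stated assumptions, not the paper's algorithm: a separable box blur stands in for the illumination estimator, and a Gaussian well-exposedness score stands in for the paper's moderate-exposure evaluation parameters.

```python
# Illustrative Retinex-style multi-exposure fusion sketch (grayscale, [0, 1]).
# Assumptions: box-blur illumination estimate, well-exposedness weighting.
import numpy as np

def box_blur(img, k=15):
    """Separable box blur used here as a simple illumination estimator."""
    kernel = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, tmp)

def fuse_exposures(images, eps=1e-6):
    """Fuse a list of grayscale exposures into one image via I = L * R."""
    illums, reflects, weights = [], [], []
    for img in images:
        L = box_blur(img) + eps        # illumination component (smooth, global)
        R = img / L                    # reflectance component (Retinex: I = L * R)
        # Favor pixels whose illumination is near mid-gray, weakening the
        # contribution of over- and under-exposed regions.
        w = np.exp(-((L - 0.5) ** 2) / (2 * 0.2 ** 2))
        illums.append(L)
        reflects.append(R)
        weights.append(w)
    W = np.stack(weights)
    W = W / (W.sum(axis=0) + eps)      # per-pixel weight normalization
    L_fused = sum(w * L for w, L in zip(W, illums))
    R_fused = sum(w * R for w, R in zip(W, reflects))
    return np.clip(L_fused * R_fused, 0.0, 1.0)  # recombine components

# Example: synthetic under- and over-exposed versions of a gradient scene.
scene = np.tile(np.linspace(0.1, 0.9, 64), (64, 1))
under = np.clip(scene * 0.4, 0.0, 1.0)
over = np.clip(scene * 1.6, 0.0, 1.0)
fused = fuse_exposures([under, over])
print(fused.shape)  # (64, 64)
```

Splitting the fusion into an illumination path and a reflectance path is what lets each path use weights suited to its role: the illumination path controls global brightness, while the reflectance path carries the color and detail that must survive fusion.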